Inspired by the Regularized Lottery Ticket Hypothesis (RLTH), which posits that smooth (non-binary) subnetworks exist within a dense network and achieve performance competitive with the dense network, we propose a few-shot class-incremental learning (FSCIL) method referred to as \emph{Soft-SubNetworks (SoftNet)}. Our objective is to incrementally learn a sequence of sessions, where each session includes only a few training instances per class, while preserving previously learned knowledge. SoftNet jointly learns the model weights and adaptive non-binary soft masks during the base training session, where each mask consists of a major and a minor subnetwork; the former aims to minimize catastrophic forgetting during training, while the latter aims to avoid overfitting to the few samples in each new training session. We provide comprehensive empirical validation showing that SoftNet effectively tackles the few-shot incremental learning problem, surpassing the performance of state-of-the-art baselines on benchmark datasets.
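The mask mechanics described above can be sketched as follows. This is a minimal NumPy illustration under assumptions of my own: a single dense layer, a capacity fraction `c`, and a split of the soft mask into major/minor subnetworks by magnitude thresholding; the paper's actual selection rule and training loop are not specified in this abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dense-layer weights and a learned non-binary soft mask in [0, 1].
weights = rng.normal(size=(4, 4))
soft_mask = rng.uniform(size=(4, 4))

# Split the mask into a "major" subnetwork (top-c fraction of mask values,
# intended to preserve previously learned knowledge) and a "minor" subnetwork
# (the rest, intended to adapt to the few new samples). c is illustrative.
c = 0.5
threshold = np.quantile(soft_mask, 1 - c)
major = soft_mask >= threshold
minor = ~major

# The forward pass uses weights modulated by the non-binary soft mask.
effective_weights = weights * soft_mask

print(major.sum(), minor.sum())   # sizes of the two subnetworks
print(effective_weights.shape)
```

Because the mask is non-binary, the minor subnetwork's weights are attenuated rather than pruned outright, which is what distinguishes this from a hard lottery-ticket mask.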
Key Point Analysis (KPA) is a relatively new task in NLP that combines summarization and classification by extracting argumentative key points (KPs) for a topic from a collection of texts and categorizing their closeness to the different arguments. In our work, we focus on the legal domain and develop methods that identify and extract KPs from premises derived from texts of judgments. The first method is an adaptation of an existing state-of-the-art method, and the two others are new methods that we developed from scratch. We present our methods and examples of their outputs, as well as a comparison between them. The full evaluation of our results is done in the matching task -- matching the generated KPs to arguments (premises).
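The matching task at the end can be formulated as nearest-neighbor assignment in an embedding space. The abstract does not specify the matching model, so the sketch below is one common formulation of my own: cosine similarity between hypothetical sentence embeddings of KPs and premises.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical sentence embeddings: 3 generated key points, 5 premises.
kp_emb = rng.normal(size=(3, 32))
premise_emb = rng.normal(size=(5, 32))

def cosine_matrix(a, b):
    """Pairwise cosine similarity between two sets of row vectors."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

# Match each premise to its closest key point; in practice a similarity
# threshold could mark premises with no matching KP.
sim = cosine_matrix(kp_emb, premise_emb)
assignment = sim.argmax(axis=0)
print(sim.shape, assignment.shape)
```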
Neural information retrieval (IR) systems have progressed rapidly in recent years, in large part due to the release of publicly available benchmarking tasks. Unfortunately, some dimensions of this progress are illusory: the majority of the popular IR benchmarks today focus exclusively on downstream task accuracy and thus conceal the costs incurred by systems that trade away efficiency for quality. Latency, hardware cost, and other efficiency considerations are paramount to the deployment of IR systems in user-facing settings. We propose that IR benchmarks structure their evaluation methodology to include not only metrics of accuracy, but also efficiency considerations such as query latency and the corresponding cost budget for a reproducible hardware setting. For the popular IR benchmarks MS MARCO and XOR-TyDi, we show how the best choice of IR system varies according to how these efficiency considerations are chosen and weighed. We hope that future benchmarks will adopt these guidelines toward more holistic IR evaluation.
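The claim that the "best" system varies with the efficiency constraints can be made concrete with a toy selection rule. The numbers and system names below are invented for illustration; the abstract does not give the benchmark figures.

```python
# Hypothetical leaderboard entries: (name, MRR@10, p95 latency in ms, $/month).
systems = [
    ("dense-large", 0.41, 450.0, 900.0),
    ("dense-small", 0.38, 120.0, 300.0),
    ("sparse-bm25", 0.26, 15.0, 40.0),
]

def best_under_budget(systems, max_latency_ms, max_cost):
    """Return the most accurate system satisfying both efficiency constraints."""
    feasible = [s for s in systems if s[2] <= max_latency_ms and s[3] <= max_cost]
    return max(feasible, key=lambda s: s[1])[0] if feasible else None

# The "best" system changes as the latency and cost budgets change.
print(best_under_budget(systems, max_latency_ms=1000, max_cost=1000))  # dense-large
print(best_under_budget(systems, max_latency_ms=200, max_cost=500))    # dense-small
print(best_under_budget(systems, max_latency_ms=50, max_cost=50))      # sparse-bm25
```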
DNA-Encoded Library (DEL) technology has enabled significant advances in hit identification by enabling efficient testing of combinatorially-generated molecular libraries. DEL screens measure protein binding affinity through sequencing reads of molecules tagged with unique DNA barcodes that survive a series of selection experiments. Computational models have been deployed to learn the latent binding affinities that are correlated with the sequenced count data; however, this correlation is often obfuscated by various sources of noise introduced in its complicated data-generation process. In order to denoise DEL count data and screen for molecules with good binding affinity, computational models require the correct assumptions in their modeling structure to capture the correct signals underlying the data. Recent advances in DEL models have focused on probabilistic formulations of count data, but existing approaches have thus far been limited to utilizing only 2-D molecule-level representations. We introduce a new paradigm, DEL-Dock, that combines ligand-based descriptors with 3-D spatial information from docked protein-ligand complexes. 3-D spatial information allows our model to learn over the actual binding modality rather than using only structure-based information of the ligand. We show that our model is capable of effectively denoising DEL count data to predict molecule enrichment scores that are better correlated with experimental binding affinity measurements compared to prior works. Moreover, by learning over a collection of docked poses we demonstrate that our model, trained only on DEL data, implicitly learns to perform good docking pose selection without requiring external supervision from expensive-to-source protein crystal structures.
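"Learning over a collection of docked poses" suggests some pooling over a set of per-pose features. The abstract does not describe DEL-Dock's architecture, so the sketch below is a generic attention-pooling formulation of my own: the pose weights let a model emphasize plausible binding modes, which is one way implicit pose selection can emerge.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical inputs for one molecule: a 2-D ligand descriptor vector and
# per-pose feature vectors from several docked protein-ligand poses.
ligand_desc = rng.normal(size=(8,))
pose_feats = rng.normal(size=(5, 8))  # 5 docked poses

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def enrichment_score(ligand_desc, pose_feats, w_pose, w_out):
    """Attention-pool the poses, then combine with the ligand descriptor."""
    attn = softmax(pose_feats @ w_pose)   # per-pose weights, sum to 1
    pooled = attn @ pose_feats            # weighted average over poses
    return float(np.concatenate([ligand_desc, pooled]) @ w_out), attn

score, attn = enrichment_score(ligand_desc, pose_feats,
                               rng.normal(size=8), rng.normal(size=16))
print(round(score, 3), attn.argmax())
```

If the training signal (denoised counts) rewards good poses, the learned attention weights can be read off as a pose ranking, with no crystal-structure supervision.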
Fine-tuning pre-trained language models (PLMs) achieves impressive performance on a range of downstream tasks, and their sizes have consequently been getting bigger. Since a different copy of the model is required for each task, this paradigm is infeasible for storage-constrained edge devices like mobile phones. In this paper, we propose SPARTAN, a parameter efficient (PE) and computationally fast architecture for edge devices that adds hierarchically organized sparse memory after each Transformer layer. SPARTAN freezes the PLM parameters and fine-tunes only its memory, thus significantly reducing storage costs by re-using the PLM backbone for different tasks. SPARTAN contains two levels of memory, with only a sparse subset of parents being chosen in the first level for each input, and children cells corresponding to those parents being used to compute an output representation. This sparsity combined with other architecture optimizations improves SPARTAN's throughput by over 90% during inference on a Raspberry Pi 4 when compared to PE baselines (adapters) while also outperforming the latter by 0.1 points on the GLUE benchmark. Further, it can be trained 34% faster in a few-shot setting, while performing within 0.9 points of adapters. Qualitative analysis shows that different parent cells in SPARTAN specialize in different topics, thus dividing responsibility efficiently.
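The two-level memory lookup can be sketched as follows. The abstract does not give SPARTAN's exact scoring or aggregation functions, so this is a minimal NumPy illustration under my own assumptions: dot-product parent selection and softmax attention over the children of the selected parents only.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_parents, children_per_parent, k = 8, 16, 4, 2

# Hypothetical memory: parent keys, plus child key/value cells per parent.
parent_keys = rng.normal(size=(n_parents, d))
child_keys = rng.normal(size=(n_parents, children_per_parent, d))
child_values = rng.normal(size=(n_parents, children_per_parent, d))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sparse_memory(x, k=2):
    # Level 1: choose only a sparse subset of k parents for this input.
    top = np.argsort(parent_keys @ x)[-k:]
    # Level 2: attend over the children of the selected parents only,
    # so most of the memory is never touched for a given input.
    keys = child_keys[top].reshape(-1, d)
    values = child_values[top].reshape(-1, d)
    attn = softmax(keys @ x)
    return attn @ values

out = sparse_memory(rng.normal(size=d), k=k)
print(out.shape)
```

Only these memory tables would be fine-tuned per task; the frozen PLM backbone is shared, which is where the storage savings come from.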
The Granger causality (GC) test is a well-known statistical hypothesis test for investigating whether the past of one time series affects the future of another. It helps answer the question of whether one time series helps predict another. Standard traditional approaches to Granger causality detection commonly assume linear dynamics, but such a simplification does not hold in many real-world applications, e.g., in neuroscience or genomics, which are inherently nonlinear. In such cases, imposing linear models such as vector autoregressive (VAR) models can lead to inconsistent estimation of the true Granger causal interactions. Machine learning (ML) models, which can learn hidden patterns in datasets, and deep learning (DL) in particular, have shown great promise in learning the nonlinear dynamics of complex systems. Recent work by Tank et al. proposes to overcome the linear simplification of VAR models by using neural networks combined with sparsity-inducing penalties on the learnable weights. In this work, building on the ideas introduced by Tank et al., we propose several new classes of models that can handle underlying nonlinearity. First, we introduce the Learned Kernel VAR (LeKVAR) model, an extension of VAR models that also learns a kernel parameterized by a neural network. Second, we show that the importance of lags and of individual time series can be decoupled through separate penalties. This decoupling provides better scaling and allows us to embed lag selection into RNNs. Finally, we propose a new training algorithm that supports mini-batching and is compatible with commonly used adaptive optimizers such as Adam. We apply the proposed approaches to electroencephalogram (EEG) data from epilepsy patients to study the evolution of GC across 19 EEG channels before, during, and after seizures.
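The sparsity-penalty idea from Tank et al. that this work builds on can be sketched concretely: each candidate input series contributes one group of first-layer weights, a group-lasso penalty pushes whole groups to zero, and a zeroed group is read as "not Granger-causal." The shapes and threshold below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)
n_series, lags, hidden = 3, 5, 8

# Hypothetical first-layer weights of a per-target neural network: one weight
# group per candidate input series (covering all of its lags).
W1 = rng.normal(size=(hidden, n_series, lags))

def group_lasso_penalty(W1, lam=0.1):
    """Sum of L2 norms over each input series' weight group."""
    return lam * sum(np.linalg.norm(W1[:, j, :]) for j in range(n_series))

def granger_causal(W1, j, tol=1e-8):
    """Series j is selected as Granger-causal iff its weight group is nonzero."""
    return np.linalg.norm(W1[:, j, :]) > tol

W1[:, 2, :] = 0.0  # suppose training drove series 2's group to exactly zero
print([granger_causal(W1, j) for j in range(n_series)])  # [True, True, False]
print(round(group_lasso_penalty(W1), 4))
```

The decoupling proposed in this abstract would replace the single group norm with separate penalties over the series axis and the lag axis, so lag selection can be handled independently.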
In recent years, 3D object detection on point clouds has made significant progress thanks to advances in 3D data collection and deep learning techniques. Nevertheless, 3D scenes exhibit considerable variation and are prone to sensor inaccuracies as well as information loss during preprocessing. It is therefore crucial to design techniques that are robust against these variations, which requires a detailed analysis and understanding of their effects. This work aims to analyze and benchmark popular point-based 3D object detectors against several data corruptions. To the best of our knowledge, we are the first to study the robustness of point-based 3D object detectors. To this end, we design and evaluate corruptions that involve data addition, reduction, and alteration. We further study the robustness of different modules against local and global variations. Our experimental results reveal several interesting findings. For instance, we show that methods that integrate Transformers at the patch or object level achieve better robustness than those that apply Transformers at the point level.
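The three corruption families named above (addition, reduction, alteration) can each be sketched in a few lines. The specific parameters below (outlier count, drop ratio, jitter scale) are illustrative choices of mine, not the benchmark's.

```python
import numpy as np

rng = np.random.default_rng(3)
points = rng.uniform(-1, 1, size=(1024, 3))  # a synthetic point cloud

def add_noise_points(pts, n=100):
    """Data addition: inject uniformly scattered outlier points."""
    extra = rng.uniform(-1, 1, size=(n, 3))
    return np.vstack([pts, extra])

def drop_points(pts, ratio=0.3):
    """Data reduction: randomly discard a fraction of the points."""
    keep = rng.random(len(pts)) >= ratio
    return pts[keep]

def jitter_points(pts, sigma=0.01):
    """Data alteration: perturb coordinates with Gaussian noise (sensor error)."""
    return pts + rng.normal(scale=sigma, size=pts.shape)

print(add_noise_points(points).shape)
print(drop_points(points).shape)
print(jitter_points(points).shape)
```

A robustness benchmark then runs a fixed detector on each corrupted version and reports the degradation relative to the clean point cloud.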
In recent years, many troll accounts have emerged to manipulate opinion on social media. Detecting and eliminating trolls is a critical issue for social network platforms, since businesses, abusers, and nation-state-sponsored troll farms use fake and automated accounts. NLP techniques are used to extract data from social network text, such as Twitter tweets. In many text-processing applications, word embedding representation methods such as BERT perform better than earlier NLP techniques, providing novel breakthroughs in precisely understanding and classifying social-networking information for various tasks. This paper implements and compares nine deep-learning-based troll tweet detection architectures, with three models for each of the BERT, ELMo, and GloVe word embedding models. Precision, recall, F1 score, AUC, and classification accuracy are used to evaluate each architecture. The experimental results show that most architectures using the BERT model improve troll tweet detection. A customized ELMo-based architecture with a GRU classifier achieves the highest AUC for detecting troll messages. The proposed architectures can be used by various social-media-based systems for future detection of troll messages.
Recent machine reading comprehension datasets include both extractive and boolean questions, but current approaches do not offer integrated support for answering both question types. We present a multilingual machine reading comprehension system and front-end demo that handles boolean questions by providing yes/no answers and highlighting supporting evidence, and handles extractive questions by highlighting answers in the passage. At the time of writing, our system, GAAMA 2.0, ranks first on the TyDi QA leaderboard. We contrast two different implementations of our approach. The first consists of several independent Transformer stacks, allowing easy deployment of each component. The second is a single stack that uses adapters to reduce the GPU memory footprint in resource-constrained environments.
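The integrated handling of the two question types amounts to routing each question to a boolean head or an extractive head. The abstract does not describe GAAMA 2.0's internals, so the control flow below is a generic sketch with toy stand-ins for the three components.

```python
def answer(question, passage, qtype_classifier, boolean_model, extractive_model):
    """Dispatch a question to the boolean or extractive head (illustrative)."""
    if qtype_classifier(question) == "boolean":
        verdict, evidence = boolean_model(question, passage)
        return {"answer": verdict, "evidence": evidence}  # yes/no + support
    start, end = extractive_model(question, passage)
    return {"answer": passage[start:end]}                 # highlighted span

# Toy stand-ins for the three components, just to exercise the control flow.
clf = lambda q: "boolean" if q.lower().startswith(("is", "are", "does")) else "extractive"
bool_m = lambda q, p: ("yes", p[:20])
ext_m = lambda q, p: (0, 5)

print(answer("Is the sky blue?", "The sky is blue.", clf, bool_m, ext_m))
print(answer("What color is the sky?", "Blue, mostly.", clf, bool_m, ext_m))
```

In the paper's two implementations, the three roles above would be played either by independent Transformer stacks or by one shared stack with per-task adapters.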
Email is one of the most widely used methods of communication, with millions of people and businesses relying on it daily to communicate and share knowledge and information. However, the growth in email users in recent years has been accompanied by a dramatic increase in spam, and properly processing and managing email is becoming increasingly difficult for individuals and companies. This paper proposes a new technique for email spam detection based on a combination of convolutional neural networks, gated recurrent units, and an attention mechanism. During training, the network selectively attends to the necessary parts of the email text. The use of convolutional layers to extract more meaningful, abstract, and generalizable features through hierarchical representations is the main contribution of this study. In addition, this contribution includes a cross-dataset evaluation, which yields performance that is more independent of the model's training dataset. Based on the cross-dataset evaluation results, the proposed technique advances the results of attention-based techniques by employing temporal convolutions, which allow more flexible receptive field sizes. The findings of the proposed technique are compared with those of state-of-the-art models and show that our approach outperforms them.
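The "selectively attends to the necessary parts of the email text" step can be sketched as attention pooling over recurrent hidden states. The hidden states and scoring vector below are random placeholders standing in for GRU outputs; the paper's exact attention form is not given in this abstract.

```python
import numpy as np

rng = np.random.default_rng(4)
seq_len, hidden = 10, 16

# Hypothetical GRU hidden states for one email, one vector per token.
h = rng.normal(size=(seq_len, hidden))

# Score each time step, softmax the scores, and pool the states into a single
# document vector that emphasizes the most relevant parts of the email text.
w = rng.normal(size=hidden)
scores = h @ w
weights = np.exp(scores - scores.max())
weights /= weights.sum()
doc_vector = weights @ h  # attention-weighted sum over time steps

print(weights.shape, doc_vector.shape)
```

A final dense layer on `doc_vector` would then produce the spam/ham decision; the convolutional layers described above would sit before the GRU, supplying its input features.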